Yesterday we talked about a different class of search algorithms, which we call local search. The idea there was that for certain problems the search spaces are so high-dimensional, with branching factors of a thousand up to a million, that you cannot actually afford to be systematic, because you hit the giga-node barrier, the roughly one billion nodes we can somehow handle, after a search depth of only three: with a branching factor of a thousand, 1000³ is already a billion nodes. So the idea is to let go of systematicity, and with that we let go of completeness as well, and of optimality anyway, and do something less memory-demanding. The idea is that you keep only one or a few search nodes in memory; everything else you forget. So there is no fringe anymore, or only a teensy-weensy little fringe, and the class of algorithms we get there is what we call local search algorithms. They get by without a lot of memory, but they also don't give you things like repeated-state checking, because you have to remember something for that, or recording the solution path, which takes memory as well. So we basically do these things for what we
call configuration problems. You just have a configuration, like in eight queens: you have a solution, and it is easy to check that it is indeed a solution, but how you got there we are not interested in, and we don't know. Same thing for factory floor layout: if you can tell that this is a good solution, then you don't want to know "what was my thought process for putting this machine here?"; you don't care as long as it makes you money. Integrated circuit design, all those kinds of things, we traditionally do by local search, and in a way local search, or search at all, isn't really considered AI anymore, because we understand it well. We played with eight queens, and the traveling salesman problem, a known hard problem, is also in this class; we looked at these things. Okay, we looked at an algorithm, an extremely
class and we looked at these things. Okay we looked at an algorithm, extremely
simple algorithm, we call it hill climbing because that's exactly what you
do. In every state you compute the successor states which there might be a
few if you have a branching factor of a thousand that is a thousand successor
states and then you pick the best one and greedily kind of follow the steepest
hill you can find with a hope to get to the overall summit in say Everest. Okay
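As a rough sketch, the greedy loop just described might look as follows in Python (a minimal steepest-ascent version; the `successors` and `value` functions are hypothetical, problem-specific plug-ins):

```python
def hill_climbing(state, successors, value):
    """Greedy local search: always move to the best neighbor.

    `successors(s)` yields the neighbor states of `s`, and `value(s)`
    is the objective we are trying to maximize (both names are
    placeholders for the problem-specific parts).
    """
    while True:
        neighbors = list(successors(state))
        if not neighbors:
            return state
        best = max(neighbors, key=value)
        # Stop when no neighbor improves on the current state:
        # we have reached a local maximum (or a plateau).
        if value(best) <= value(state):
            return state
        state = best
```

Note that the loop keeps only the current state in memory, which is exactly the point: no fringe, no path, no visited set.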
We looked at the concrete example of eight queens, and there you see two things. One is that it is still important how we represent the problem, and this is a particularly nice problem representation: we represent every board as an 8-tuple of digits between one and eight. We know that if there are two queens in one column, that is not going to be a solution anyway, so we can restrict queens to moving within their column and go for this very simple representation. The idea is that we compute the heuristic value for all possible moves; each of the eight queens can move to seven other squares in its column, so we have 64 minus 8, that is 56, successor states here, a branching factor of 56. You can see all the successor states, and there are four that are optimal; we choose one of those. If you keep doing that, you end up in a local minimum, which unfortunately might not be a solution. Okay, so what do
we do? We do random restarts. The general phenomena you encounter are in this picture: we have local minima and maxima, global maxima, and flat regions, which might actually be maxima or not. When we get into any of these features, we don't quite know what to do, and whenever we don't know what to do in AI, we do something random. Humans, by the way, do it too.
Accessible via: Open access
Duration: 00:08:06 min
Recording date: 2020-10-28
Uploaded on: 2020-10-28 12:16:53
Language: en-US
Recap: Local Search (Part 1)
The main video on this topic is chapter 7, clip 11.